

Search for: All records

Creators/Authors contains: "Chi, Hongmei"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the publisher's embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Quantum-based machine learning (QML) combines quantum computing (QC) with machine learning (ML). It has applications across many sectors, and demand for QML professionals is high; however, QML is not yet part of most schools' curricula. We design labware covering the basic concepts of QC, ML, and QML and their applications in science and engineering fields, delivered in Google Colab and applying a three-stage learning strategy for efficient and effective student learning. 
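As a flavor of the kind of introductory QC exercise such Colab labware might open with (this sketch is illustrative, not taken from the labware itself), a single-qubit Hadamard gate can be simulated with plain NumPy to show superposition:

```python
import numpy as np

# Hadamard gate: puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])   # the |0> basis state
state = H @ ket0              # apply the gate

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]
```

Running this in a Colab cell shows the qubit measured as 0 or 1 with equal probability, a natural first step before moving to ML and QML notebooks.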
    Free, publicly-accessible full text available February 18, 2026
  2. Free, publicly-accessible full text available January 1, 2026
  3. Wang, Yan; Yang, Hui (Ed.)
    Abstract The scarcity of measured data for defect identification often challenges the development and certification of additive manufacturing processes. Knowledge transfer and sharing have become emerging solutions to small-data challenges in quality control, improving machine learning with limited data, but this strategy raises concerns about privacy protection. Existing zero-shot learning and federated learning methods are insufficient to represent, select, and mask the data to be shared and to quantify the resulting privacy loss. This study integrates differential privacy from cybersecurity with federated learning to investigate strategies for sharing manufacturing defect ontology. The method first proposes using multilevel attributes masked by noise in the defect ontology as the shared data structure for characterizing manufacturing defects. Information leakage due to sharing ontology branches and data is estimated via epsilon-differential privacy (DP). Under federated learning, the proposed method optimizes strategies for sharing defect ontology and image data to improve zero-shot defect classification within privacy budget limits. The proposed framework includes (1) a sharing strategy based on multilevel attributes in the defect ontology with controllable privacy leakage, (2) joint optimization of differential privacy, zero-shot defect classification, and federated learning decisions, and (3) a two-stage algorithm that solves the joint optimization by combining stochastic gradient descent for the classification models with an evolutionary algorithm that explores data-sharing strategies. A case study on zero-shot learning of additive manufacturing defects demonstrated the effectiveness of the proposed method in data-sharing strategies, such as ontology sharing, defect classification, and cloud information use. 
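To make the noise-masking idea concrete, here is a minimal sketch of the standard Laplace mechanism for epsilon-DP release of a numeric ontology attribute. The function name and the "porosity defect count" attribute are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with epsilon-DP by adding Laplace(sensitivity/epsilon) noise."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon  # smaller epsilon -> more noise -> stronger privacy
    return value + rng.laplace(0.0, scale)

# Hypothetical attribute: count of porosity defects at one ontology level.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Each shared ontology branch would consume part of the overall privacy budget, which is what the framework's joint optimization trades off against zero-shot classification accuracy.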
    Free, publicly-accessible full text available November 7, 2025
  4. Students, especially those outside the field of cybersecurity, are increasingly turning to Large Language Model (LLM)-based generative AI tools for coding assistance. These AI code generators provide valuable support to developers by generating code from the provided input and instructions. However, the quality and accuracy of the generated code vary with factors such as task complexity, the clarity of instructions, and the model's familiarity with the programming language. Additionally, the generated code may inadvertently use vulnerable built-in functions, potentially introducing source-code vulnerabilities and exploits. This research undertakes an in-depth analysis and comparison of the code generation, code completion, and security suggestions offered by prominent AI models, including OpenAI Codex, CodeBERT, and ChatGPT. The research evaluates the effectiveness and security of these tools in terms of their code generation and code completion capabilities and their ability to enhance security. This analysis serves as a valuable resource for developers, enabling them to proactively avoid introducing security vulnerabilities into their projects and thereby significantly reduce the need for extensive revisions and resource allocation in both the short and long term. 
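A common instance of the "vulnerable built-in function" problem described above (this example is illustrative, not drawn from the paper's dataset) is a code generator reaching for `eval` to parse user-supplied text, which executes arbitrary Python; the standard-library `ast.literal_eval` is the safe alternative for literal data:

```python
import ast

user_input = "[1, 2, 3]"

# Vulnerable pattern sometimes emitted by code generators:
#   data = eval(user_input)
# eval() executes arbitrary Python, so crafted input such as
# "__import__('os').system('...')" would run attacker-controlled code.

# Safer alternative: ast.literal_eval accepts only Python literals
# (strings, numbers, tuples, lists, dicts, sets, booleans, None).
data = ast.literal_eval(user_input)
print(data)  # [1, 2, 3]
```

Static checks for such known-dangerous calls are one concrete way a security-aware completion tool can flag generated code before it reaches a project.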